    Democratization in a passive dendritic tree: an analytical investigation

    One way to achieve amplification of distal synaptic inputs on a dendritic tree is to scale the amplitude and/or duration of the synaptic conductance with its distance from the soma. This is an example of what is often referred to as “dendritic democracy”. Although well studied experimentally, to date this phenomenon has not been thoroughly explored from a mathematical perspective. In this paper we adopt a passive model of a dendritic tree with distributed excitatory synaptic conductances and analyze a number of key measures of democracy. In particular, via moment methods we derive laws for the transport, from synapse to soma, of strength, characteristic time, and dispersion. These laws lead immediately to synaptic scalings that overcome attenuation with distance. We follow this with a Neumann approximation of Green’s representation that readily produces the synaptic scaling that democratizes the peak somatic voltage response. Results are obtained for both idealized geometries and for the more realistic geometry of a rat CA1 pyramidal cell. For each measure of democratization we produce and contrast the synaptic scaling associated with treating the synapse as either a conductance change or a current injection. We find that our respective scalings agree up to a critical distance from the soma, and we reveal how this critical distance decreases with decreasing branch radius.
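
    The basic idea is easy to illustrate in the simplest setting. A minimal sketch, assuming steady-state attenuation exp(-x/lambda) on a semi-infinite passive cable (a current-injection idealization, not the paper's moment-based or Neumann-series scalings): scaling synaptic strength as exp(+x/lambda) equalizes the somatic response across distances.

```python
import numpy as np

# Steady-state attenuation on a semi-infinite passive cable: an input at
# distance x from the soma arrives attenuated by exp(-x / lam), where lam
# is the space constant (value illustrative).
lam = 500.0  # micrometres

def somatic_response(g_syn, x):
    """Somatic response to a synapse of strength g_syn at distance x."""
    return g_syn * np.exp(-x / lam)

def democratic_scaling(g0, x):
    """Scale synaptic strength with distance to cancel the attenuation."""
    return g0 * np.exp(x / lam)

for x in (0.0, 250.0, 500.0, 1000.0):
    g = democratic_scaling(1.0, x)
    print(f"x = {x:6.1f} um: g = {g:5.2f}, somatic response = "
          f"{somatic_response(g, x):.2f}")
```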

    How spiking neurons give rise to a temporal-feature map

    A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule in conjunction with a presynaptically unspecific contribution: if a synapse changes, then all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells’ synaptic structures, as a function of the strength of unspecific presynaptic learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
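
    The presynaptically unspecific contribution is simple to state in update-rule form. A minimal sketch, assuming a plain rate-based Hebbian term for the specific change (the paper's rule is spike-based; eta and eps are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_post, n_axons = 6, 4
w = rng.uniform(0.4, 0.6, (n_post, n_axons))  # weights, one row per cell

eta = 0.01  # Hebbian learning rate (illustrative)
eps = 0.1   # fraction of each change shared with the same axon's other synapses

def update(w, pre, post):
    """One learning step: a specific Hebbian change per synapse, plus a
    presynaptically unspecific share -- each synapse also receives eps times
    the changes of the other synapses made by the same axon."""
    dw = eta * np.outer(post, pre)              # specific Hebbian term
    per_axon = dw.sum(axis=0)                   # total change on each axon
    dw_unspec = eps * (per_axon[None, :] - dw)  # share from the *other* synapses
    return w + dw + dw_unspec

pre = rng.poisson(2.0, n_axons).astype(float)   # Poisson presynaptic activity
post = rng.poisson(2.0, n_post).astype(float)
w = update(w, pre, post)
print(w.round(3))
```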

    What we talk about when we talk about capacitance measured with the voltage-clamp step method

    Capacitance is a fundamental neuronal property. One common way to measure capacitance is to deliver a small voltage-clamp step that is long enough for the clamp current to come to steady state, and then to divide the integrated transient charge by the voltage-clamp step size. In an isopotential neuron, this method is known to measure the total cell capacitance. However, in a cell that is not isopotential, this measures only a fraction of the total capacitance. This has generally been thought of as measuring the capacitance of the “well-clamped” part of the membrane, but the exact meaning of this has been unclear. Here, we show that the capacitance measured in this way is a weighted sum of capacitance over the whole membrane, where the weight for a given small patch of membrane is the voltage deflection at that patch, expressed as a fraction of the voltage-clamp step size. This quantifies precisely what it means to measure the capacitance of the “well-clamped” part of the neuron. Furthermore, it reveals that the voltage-clamp step method measures a well-defined quantity, one that may be more useful than the total cell capacitance for normalizing conductances measured in voltage clamp in nonisopotential cells.
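
    Once the clamp current is recorded, the measurement itself is a one-liner: subtract the steady-state current, integrate the remaining transient, and divide by the step size. A minimal sketch on a synthetic single-exponential transient for an isopotential cell (all values illustrative):

```python
import numpy as np

dt = 1e-6                              # s, sampling interval
t = np.arange(0.0, 50e-3, dt)          # 50 ms record, long enough for steady state
dV = 5e-3                              # V, size of the voltage-clamp step

# Synthetic clamp current: a steady-state level plus a single-exponential
# transient carrying charge C_true * dV (values illustrative).
C_true = 100e-12                       # 100 pF
tau = 2e-3                             # 2 ms clamp time constant
I_ss = 50e-12                          # 50 pA steady-state current
I = I_ss + (C_true * dV / tau) * np.exp(-t / tau)

# The step method: integrate the transient charge, divide by the step size.
Q = np.sum(I - I_ss) * dt              # integrated transient charge (coulombs)
C_measured = Q / dV
print(f"C_measured = {C_measured * 1e12:.1f} pF (true value: 100.0 pF)")
```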

    Dendritic Morphology Predicts Pattern Recognition Performance in Multi-compartmental Model Neurons with and without Active Conductances

    In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, even in more accentuated form, for neurons with active membranes.
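
    Two of the morphological statistics above can be computed from branching structure alone. A minimal sketch for binary trees encoded as nested tuples, with a leaf written as None; the asymmetry index here follows the common partition-asymmetry definition, which is an assumption since the abstract does not give the formula:

```python
def depth_stats(tree, depth=0):
    """Return (sum of terminal depths, number of terminals)."""
    if tree is None:                        # leaf (terminal point)
        return depth, 1
    s_l, n_l = depth_stats(tree[0], depth + 1)
    s_r, n_r = depth_stats(tree[1], depth + 1)
    return s_l + s_r, n_l + n_r

def asymmetry_index(tree):
    """Mean partition asymmetry |r - s| / (r + s - 2) over all branch points."""
    parts = []
    def walk(t):
        if t is None:
            return 1                        # one terminal in this subtree
        r, s = walk(t[0]), walk(t[1])
        parts.append(abs(r - s) / (r + s - 2) if r + s > 2 else 0.0)
        return r + s
    walk(tree)
    return sum(parts) / len(parts)

balanced = ((None, None), (None, None))     # 4 terminals, symmetric
chain = (((None, None), None), None)        # 4 terminals, maximally asymmetric
for name, t in [("balanced", balanced), ("chain", chain)]:
    s, n = depth_stats(t)
    print(f"{name}: mean depth = {s / n:.2f}, asymmetry = {asymmetry_index(t):.2f}")
```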

    Biophysical Basis for Three Distinct Dynamical Mechanisms of Action Potential Initiation

    Transduction of graded synaptic input into trains of all-or-none action potentials (spikes) is a crucial step in neural coding. Hodgkin identified three classes of neurons with qualitatively different analog-to-digital transduction properties. Despite widespread use of this classification scheme, a generalizable explanation of its biophysical basis has not been described. We recorded from spinal sensory neurons representing each class and reproduced their transduction properties in a minimal model. With phase plane and bifurcation analysis, each class of excitability was shown to derive from distinct spike-initiating dynamics. Excitability could be converted between all three classes by varying single parameters; moreover, several parameters, when varied one at a time, had functionally equivalent effects on excitability. From this, we conclude that the spike-initiating dynamics associated with each of Hodgkin's classes represent different outcomes in a nonlinear competition between oppositely directed, kinetically mismatched currents. Class 1 excitability occurs through a saddle-node on invariant circle bifurcation when net current at perithreshold potentials is inward (depolarizing) at steady state. Class 2 excitability occurs through a Hopf bifurcation when, despite net current being outward (hyperpolarizing) at steady state, spike initiation occurs because inward current activates faster than outward current. Class 3 excitability occurs through a quasi-separatrix crossing when fast-activating inward current overpowers slow-activating outward current during a stimulus transient, although slow-activating outward current dominates during constant stimulation. Experiments confirmed that different classes of spinal lamina I neurons express the subthreshold currents predicted by our simulations and, further, that those currents are necessary for the excitability in each cell class. Thus, our results demonstrate that all three classes of excitability arise from a continuum in the direction and magnitude of subthreshold currents. Through detailed analysis of the spike-initiating process, we have explained a fundamental link between biophysical properties and qualitative differences in how neurons encode sensory input.
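
    A minimal sketch of the kind of minimal model involved: a two-dimensional Morris-Lecar-style neuron in which a single parameter, here beta_w, the half-activation voltage of the slow outward current, shifts the subthreshold balance between inward and outward currents. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# 2D Morris-Lecar-style model; beta_w sets where the slow outward current
# activates, shifting the balance of subthreshold currents. Values illustrative.
C, gL, EL = 2.0, 2.0, -70.0
gNa, ENa = 20.0, 50.0          # fast, instantaneously activating inward current
gK, EK = 20.0, -100.0          # slow outward current
bm, gm, gw, phi = -1.2, 18.0, 10.0, 0.15

def spike_count(I_stim, beta_w, T=500.0, dt=0.05):
    """Forward-Euler integration; counts upward crossings of 0 mV."""
    V, w, spikes, above = -70.0, 0.0, 0, False
    for _ in range(int(T / dt)):
        m_inf = 0.5 * (1 + np.tanh((V - bm) / gm))
        w_inf = 0.5 * (1 + np.tanh((V - beta_w) / gw))
        tau_w = 1.0 / np.cosh((V - beta_w) / (2 * gw))
        dV = (I_stim - gL*(V - EL) - gNa*m_inf*(V - ENa) - gK*w*(V - EK)) / C
        V, w = V + dt * dV, w + dt * phi * (w_inf - w) / tau_w
        if V > 0 and not above:
            spikes += 1
        above = V > 0
    return spikes

# Expected qualitative picture as drive increases: continuous f-I onset
# (class 1-like), abrupt onset at a nonzero rate (class 2-like), and at most
# a transient spike under sustained drive (class 3-like).
for beta_w, label in [(0.0, "class 1-like"), (-13.0, "class 2-like"),
                      (-21.0, "class 3-like")]:
    print(label, [spike_count(I, beta_w) for I in (30, 40, 50, 60)])
```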

    The morphoelectrotonic transform: A graphical approach to dendritic function

    The electrotonic structure of dendrites plays a critical role in neuronal computation and plasticity. In this article we develop two novel measures of electrotonic structure that describe intraneuronal signaling in dendrites of arbitrary geometry. The log-attenuation L(ij) measures the efficacy, and the propagation delay P(ij) the speed, of signal transfer between any two points i and j. These measures are additive, in the sense that if j lies between i and k, the total distance L(ik) is just the sum of the partial distances: L(ik) = L(ij) + L(jk), and similarly P(ik) = P(ij) + P(jk). This property serves as the basis for the morphoelectrotonic transform (MET), a graphical mapping from morphological into electrotonic space. In a MET, either P(ij) or L(ij) replaces anatomical distance as the fundamental unit and so provides a direct functional measure of intraneuronal signaling. The analysis holds for arbitrary transient signals, even those generated by nonlinear conductance changes underlying both synaptic and action potentials. Depending on input location and the measure of interest, a single neuron admits many METs, each emphasizing different functional consequences of the dendritic electrotonic structure. Using a single layer 5 cortical pyramidal neuron, we illustrate a collection of METs that lead to a deeper understanding of the electrical behavior of its dendritic tree. We then compare this cortical cell to representative neurons from other brain regions (cortical layer 2/3 pyramidal, region CA1 hippocampal pyramidal, and cerebellar Purkinje). Finally, we apply the MET to electrical signaling in dendritic spines, and extend this analysis to calcium signaling within spines. Our results demonstrate that the MET provides a powerful tool for obtaining a rapid and intuitive grasp of the functional properties of dendritic trees.
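
    The additivity that underpins the MET amounts to the observation that attenuation factors compose multiplicatively along a path, so their logarithms add. A minimal sketch with illustrative attenuation values:

```python
import math

A_ij = 0.6   # voltage attenuation from point i to point j (illustrative)
A_jk = 0.5   # voltage attenuation from point j to point k (illustrative)

L_ij = -math.log(A_ij)          # log-attenuation i -> j
L_jk = -math.log(A_jk)          # log-attenuation j -> k
L_ik = -math.log(A_ij * A_jk)   # attenuations multiply along the path

assert abs(L_ik - (L_ij + L_jk)) < 1e-12   # so log-attenuations add
print(f"L_ij = {L_ij:.3f}, L_jk = {L_jk:.3f}, L_ik = {L_ik:.3f}")
```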

    Morphoelectrotonic Transform

    Learning in spatially extended dendrites

    Dendrites are not static structures: new synaptic connections are established and old ones disappear. Moreover, it is now known that plasticity can vary with distance from the soma [1]. Consequently, it is of great interest to combine learning algorithms with spatially extended neuron models. In particular, this may shed further light on the computational advantages of plastic dendrites, say for direction selectivity or coincidence detection. Direction-selective neurons fire for one spatio-temporal input sequence on their dendritic tree but stay silent if the temporal order is reversed [2], whilst “coincidence detectors” such as those in the auditory brainstem are known to make use of dendrites to detect temporal differences in sound arrival times between the ears to an astounding accuracy [3]. Here we develop one such combination of learning and dendritic dynamics by extending the “Spike-Diffuse-Spike” framework [4] of an active dendritic tree to incorporate both artificial (tempotron-style [5]) and biological (STDP-style [2]) learning rules.
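
    As a concrete example of the biological rule mentioned last, here is a minimal pair-based STDP sketch with exponential windows; the amplitudes and time constants are illustrative, and the rule actually used in this framework may differ:

```python
import math

A_plus, A_minus = 0.01, 0.012       # LTP/LTD amplitudes (illustrative)
tau_plus = tau_minus = 20.0         # ms, window time constants (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_plus * math.exp(-dt / tau_plus)    # pre before post: LTP
    return -A_minus * math.exp(dt / tau_minus)      # post before pre: LTD

pre_spikes = [10.0, 50.0, 90.0]     # ms
post_spikes = [12.0, 45.0, 95.0]    # ms
dw = sum(stdp_dw(p, q) for p in pre_spikes for q in post_spikes)
print(f"net weight change over all pairs: {dw:+.4f}")
```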